[ET-VK][export] Update tensor representation sync logic to allow for flexibility in memory layouts #17564
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/17564
Note: Links to docs will display an error until the docs builds have been completed.
❌ 4 New Failures, 1 Unrelated Failure as of commit eed7b1c with merge base 9a58ce8.
NEW FAILURES - The following jobs have failed:
BROKEN TRUNK - The following job failed but was also present on the merge base. 👉 Rebase onto the `viable/strict` branch to avoid these failures.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Pull Request resolved: #17565

Add a new q8ta_linear operator that performs fully quantized int8 linear (matmul + bias) with per-tensor activation quantization and per-channel weight quantization, producing int8 output. This enables back-to-back quantized linear layers without intermediate dequantize/quantize steps.

The operator reuses the existing tiled int8 linear GLSL headers (input/weight tile loading, int8 dot product accumulation, weight scales/sums/bias loading) and adds output quantization via quantize_and_pack to produce packed int8 output.

The fusion pass in quantized_linear.py detects the q→dq→linear→q pattern (where the output quantize node comes from a subsequent quantized op's input) and fuses it into a single q8ta_linear call.

This diff was authored with Claude.

ghstack-source-id: 343460521
@exported-using-ghexport
Differential Revision: [D93768642](https://our.internmc.facebook.com/intern/diff/D93768642/)
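As a rough illustration of the pattern check described above, the following Python sketch walks the FX node arguments and users; the helper predicates (`_is_quantize_per_tensor`, `_is_dequantize_per_tensor`) are hypothetical stand-ins, not the actual pattern-registry API.

```python
# Hypothetical sketch of detecting the q -> dq -> linear -> q chain around a linear node.
def is_fusable_q8ta_linear(linear_node) -> bool:
    dq = linear_node.args[0]
    # Input must arrive through a dequantize that is fed by a quantize (q -> dq -> linear).
    if not _is_dequantize_per_tensor(dq) or not _is_quantize_per_tensor(dq.args[0]):
        return False
    # The output quantize node is the quantize consuming the linear's result,
    # i.e. it originates from a subsequent quantized op's input.
    return any(_is_quantize_per_tensor(user) for user in linear_node.users)
```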
Pull Request resolved: #17566

Add a cooperative GEMV variant of q8ta_linear optimized for batch size 1. The existing q8ta_linear uses a tiled algorithm with the 4H4W packed int8 layout, which is inefficient for single-row inputs because it wastes 3/4 of each ivec4 block.

The new q8ta_linear_gemv uses the 4W packed int8 layout (scalar int[] buffers) and a cooperative algorithm where 64 threads split the K reduction dimension with a shared memory tree reduction. The shader loads one packed int32 (4 int8 values) per thread per K iteration and accumulates dot products against the weight tile using dotPacked4x8AccSatEXT. After reduction, thread 0 applies scales, zero points, and bias, and quantizes the output.

The pattern matcher in quantized_linear.py selects q8ta_linear_gemv when the input batch dimension is 1, falling back to q8ta_linear for larger batches. Also adds PACKED_INT8_4W (value 5) to the serialization schema to support the 4W memory layout in the export pipeline.

Authored with Claude.

ghstack-source-id: 343460519
@exported-using-ghexport
Differential Revision: [D93768643](https://our.internmc.facebook.com/intern/diff/D93768643/)
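A minimal sketch of the variant selection described above, assuming the input's export-time shape is available through its FX node metadata; the function name is illustrative.

```python
# Hypothetical sketch: pick the cooperative GEMV shader for single-row inputs.
def select_q8ta_linear_variant(input_node) -> str:
    shape = input_node.meta["val"].shape  # shape recorded during export
    batch = shape[0] if len(shape) >= 2 else 1
    return "q8ta_linear_gemv" if batch == 1 else "q8ta_linear"
```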
Pull Request resolved: #17567

QuantizedLinearMatch always used args[1] for the weight and args[0] for the input, which is correct for mm(input, weight) and linear(input, weight, bias?) but wrong for addmm(bias, input, weight), where the weight is at args[2] and the input is at args[1]. This was exposed by a torchao change (D69887498) that added Linear+BatchNorm fusion to prepare_pt2e(). The fusion adds a bias to Linear nodes that previously had none, causing them to decompose to addmm instead of mm in the edge dialect. The pattern matcher then read the input's per-tensor dequantize scale (a float literal) as if it were the weight's per-channel scale (a Node), causing an assertion failure. The fix determines the correct arg indices based on whether the anchor node is addmm; the bias handling at args[0] for addmm was already correct.

When two quantized linears are chained (e.g. in the SceneX prediction head), the pattern registry processes them in topological order and applies replacements immediately. The first linear's replacement calls `replace_all_uses_with`, which rewrites the dq node's input from the original quantize op to the new q8ta_linear op. When the second linear is then matched, `maybe_skip_q_dq_arg_chain` traces back through the dq node and finds the q8ta_linear op instead of the original quantize op. The code then extracts scale/zp from args[1]/args[2] of that node, but q8ta_linear has a different args layout than quantize_per_tensor: args[1]/args[2] are the first linear's INPUT scale/zp, not its OUTPUT scale/zp. This causes the second linear to wildly misinterpret its input values, saturating outputs to -128/127. The fix reads input scale/zp from the dq node's args instead of the quantize node's args. The dq node always retains the correct scale/zp because `replace_all_uses_with` only rewrites its input tensor (args[0]), not the scale/zp args. This is both simpler and more robust than special-casing the q8ta_linear args layout.

Authored-by: Claude

ghstack-source-id: 343460523
@exported-using-ghexport
Differential Revision: [D93768640](https://our.internmc.facebook.com/intern/diff/D93768640/)
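A small sketch of the two fixes described above, assuming FX-style nodes; the index helper and the way scale/zp are read off the dq node are illustrative of the described approach, not the exact code.

```python
# Hypothetical sketch: arg indices depend on whether the anchor op is addmm.
# addmm(bias, input, weight) shifts input/weight one slot to the right compared
# to mm(input, weight) and linear(input, weight, bias?).
def linear_arg_indices(anchor_node):
    if "addmm" in str(anchor_node.target):
        return 1, 2  # input at args[1], weight at args[2]; bias stays at args[0]
    return 0, 1      # input at args[0], weight at args[1]

# Hypothetical sketch: read input scale/zp from the dq node itself, which keeps
# the correct values even after its producer has been replaced by q8ta_linear.
def input_scale_zp(dq_node):
    return dq_node.args[1], dq_node.args[2]
```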
Pull Request resolved: #17568

The q8ta_conv2d operator previously always delegated to the general (sliding window) implementation, even though the im2col implementation is 2-5x faster for non-grouped convolutions with in_channels % 4 == 0. This change adds runtime auto-selection logic that checks the groups parameter and input channel alignment, then dispatches to q8ta_conv2d_im2col when its constraints are met.

On ResNet50 int8, this reduces Vulkan inference latency from 14.2ms to 6.8ms (2.1x speedup) on a Samsung Galaxy S24, making it 30% faster than XNNPACK (9.7ms). Also adds performance test cases for deep-channel, small-spatial scenarios (512ch 7x7, 1024→2048ch 1x1 stride-2) that stress-test the optimization.

ghstack-source-id: 343460520
@exported-using-ghexport
Differential Revision: [D93768637](https://our.internmc.facebook.com/intern/diff/D93768637/)
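The auto-selection rule above reduces to a small predicate; here is a Python sketch under the assumption that groups and input channels are known at dispatch time (the real check lives in the C++ runtime, and the name of the general fallback is illustrative).

```python
# Hypothetical sketch of the q8ta_conv2d implementation selection described above.
def use_im2col(groups: int, in_channels: int) -> bool:
    # im2col path: non-grouped convolution with input channels divisible by 4.
    return groups == 1 and in_channels % 4 == 0

def select_q8ta_conv2d_impl(groups: int, in_channels: int) -> str:
    # Fall back to the general sliding-window implementation otherwise.
    return "q8ta_conv2d_im2col" if use_im2col(groups, in_channels) else "q8ta_conv2d_general"
```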
af616b2 to eed7b1c
Stack from ghstack (oldest at bottom):
The tag_memory_meta_pass determines which storage type and memory layout to use for each tensor in the graph. Previously, OpRepSets enforced that "synced" tensors (e.g. all inputs to a binary op) use the exact same storage type AND memory layout by collapsing them into a single shared TensorRepSet. This was overly restrictive for quantized operators like q8ta_add, where inputs and outputs must share the same packed dimension but are allowed to use different memory layouts (e.g. input A uses PACKED_INT8_4W4C, input B uses PACKED_INT8_4C1W, output uses PACKED_INT8_4C1W).
This diff introduces PackedDimInfo, a Python-side mirror of the C++ PackedDimInfo struct in Tensor.h, which captures the packed dimension and block size for each memory layout. The sync logic is rewritten so that synced tensors are constrained to have "compatible" packed dim info (PDI), i.e. the same packed_dim and packed_dim_block_size, rather than identical memory layouts. This is achieved through three new TensorRepSet methods: has_same_packed_dim_info_set checks exact PDI equality, has_compatible_packed_dim_info_set checks superset containment, and filter_for_compatible_packed_dim_infos narrows a repset to only layouts with compatible PDIs.
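A minimal sketch, with simplified definitions, of what the PackedDimInfo mirror and the compatibility checks described above could look like; the field and method names follow the description, but the bodies are illustrative rather than the actual implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PackedDimInfo:
    # Sketch of the Python-side mirror of the C++ PackedDimInfo struct in Tensor.h.
    packed_dim: int
    packed_dim_block_size: int

class TensorRepSet:
    def __init__(self, layouts_to_pdi: dict):
        # Maps each candidate memory layout to its PackedDimInfo.
        self.layouts_to_pdi = layouts_to_pdi

    def packed_dim_infos(self) -> set:
        return set(self.layouts_to_pdi.values())

    def has_same_packed_dim_info_set(self, other) -> bool:
        # Exact PDI equality between the two repsets.
        return self.packed_dim_infos() == other.packed_dim_infos()

    def has_compatible_packed_dim_info_set(self, other) -> bool:
        # Superset containment: every PDI reachable by `other` is also reachable here.
        return self.packed_dim_infos() >= other.packed_dim_infos()

    def filter_for_compatible_packed_dim_infos(self, other) -> "TensorRepSet":
        # Narrow to layouts whose PDI also appears in `other`.
        allowed = other.packed_dim_infos()
        return TensorRepSet(
            {layout: pdi for layout, pdi in self.layouts_to_pdi.items() if pdi in allowed}
        )
```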
The OpRepSets initialization now stores individual repsets per arg/output instead of collapsing synced groups into a single object, and constraint propagation uses packed-dim filtering. The tag_memory_meta_pass is simplified to always call constrain_op_out_repset since the new OpRepSets sync logic handles propagation internally.
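Building on the TensorRepSet sketch above, the per-arg bookkeeping and packed-dim propagation described here might look roughly like the following; the constructor signature and sync-group representation are assumptions.

```python
# Hypothetical sketch: one repset per argument and per output, with constraint
# propagation via packed-dim filtering instead of a single shared repset.
class OpRepSets:
    def __init__(self, arg_repsets, out_repset, synced_arg_indices):
        self.arg_repsets = list(arg_repsets)          # one repset per argument
        self.out_repset = out_repset                  # repset for the output
        self.synced_arg_indices = synced_arg_indices  # args synced with the output

    def constrain_op_out_repset(self):
        # Narrow synced args and the output to mutually compatible packed dim infos,
        # rather than forcing identical memory layouts across the group.
        for i in self.synced_arg_indices:
            self.arg_repsets[i] = self.arg_repsets[i].filter_for_compatible_packed_dim_infos(
                self.out_repset
            )
            self.out_repset = self.out_repset.filter_for_compatible_packed_dim_infos(
                self.arg_repsets[i]
            )
```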
Also renames make_filtered_tensor_repset to filter_invalid_reprs for clarity and adds comprehensive unit tests for TensorRepSet, TensorRepSetList, OpRepSets, and TensorReprList.
Authored with Claude.
Differential Revision: D93768636